
Gatsby Computational Neuroscience Unit


Brooks Paige

Monday 25th March 2019

Time: 4.00pm

Ground Floor Seminar Room

25 Howland Street, London, W1T 4JG

Semi-interpretable probabilistic models

Statistical models such as linear regression, additive models, and Bayesian networks have seen widespread adoption in part due to their inherent interpretability and transparency. In contrast, modern machine learning methods tend to operate as black boxes: although they may make accurate predictions from data, they cannot easily be audited by their users, leading to an overall lack of trust. This dichotomy forces difficult decisions: when do the benefits of transparency outweigh any unexplained variance left on the table? In this talk I will explore ways in which deep learning models can be used as small components within a larger interpretable model to build grey-box or semi-interpretable models, primarily with an eye towards structured deep generative models and implementation in probabilistic programming languages. I will illustrate this with two ongoing applications: one incorporating street-view imagery into a model of London house prices, and one predicting the outcomes of organic chemical reactions.
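To make the grey-box idea concrete, the following is a minimal sketch (not code from the talk) of one way such a semi-interpretable model could look in PyTorch: tabular covariates enter through an ordinary linear term whose coefficients remain directly readable, while a small convolutional network compresses an image into a single learned covariate that is simply added on. All names, dimensions, and the architecture here are illustrative assumptions.

import torch
import torch.nn as nn

class SemiInterpretableRegression(nn.Module):
    """Grey-box regression: an interpretable linear term on tabular
    covariates plus a black-box CNN summary of an image.

    Hypothetical sketch; the architecture and dimensions are
    assumptions for illustration, not taken from the talk.
    """

    def __init__(self, n_tabular: int):
        super().__init__()
        # Interpretable part: one coefficient per tabular covariate,
        # readable exactly as in an ordinary linear regression.
        self.linear = nn.Linear(n_tabular, 1)
        # Black-box part: a tiny CNN mapping a 3x64x64 image to a
        # single scalar covariate that enters the model additively.
        self.image_net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, x_tab, x_img):
        # The additive combination keeps the tabular coefficients
        # meaningful: the network only contributes one extra term.
        return self.linear(x_tab) + self.image_net(x_img)

# Usage sketch with random stand-in data.
model = SemiInterpretableRegression(n_tabular=5)
x_tab = torch.randn(32, 5)           # e.g. floor area, rooms, ...
x_img = torch.randn(32, 3, 64, 64)   # e.g. a street-view photo
price = model(x_tab, x_img)
print(price.shape)          # torch.Size([32, 1])
print(model.linear.weight)  # the interpretable coefficients

Because the image network contributes only a single additive scalar, the tabular part of the model can still be read and audited like a standard regression; the unexplained variance captured by the black box is confined to that one term.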

Biography
Brooks Paige is a Research Fellow at the Alan Turing Institute, affiliated with the University of Cambridge. His research is at the intersection of machine learning and Bayesian statistics, with a particular focus on the design of probabilistic programming languages that can represent complex generative models, and on the development of scalable methods for automatic inference. He holds a D.Phil. in Engineering Science from the University of Oxford, where he was supervised by Frank Wood; an M.A. in Statistics from Columbia University; and a B.A. in Mathematics from Amherst College.